In a traditional convolutional layer, the learned filters stay fixed after training. In contrast, we introduce a new framework, the Dynamic Filter Network, where filters are generated dynamically conditioned on an input. We show that this architecture is a powerful one, with increased flexibility thanks to its adaptive nature, yet without an excessive increase in the number of model parameters. A wide variety of filtering operations can be learned this way, including local spatial transformations, but also others like selective (de)blurring or adaptive feature extraction. Moreover, multiple such layers can be combined, e.g. in a recurrent architecture. We demonstrate the effectiveness of the dynamic filter network on the tasks of video and stereo prediction, and reach state-of-the-art performance on the moving MNIST dataset with a much smaller model. By visualizing the learned filters, we illustrate that the network has picked up flow information by only looking at unlabelled training data. This suggests that the network can be used to pretrain networks for various supervised tasks in an unsupervised way, like optical flow and depth estimation. * X. Jia and B. De Brabandere contributed equally to this work and are listed in alphabetical order.
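For readers who want the core idea in code, below is a minimal sketch (in PyTorch) of a dynamic local-filtering layer: a small filter-generating network predicts one kernel per input sample, and that kernel is then applied to the same sample. The layer sizes, the softmax-normalized kernels, and the single-channel setup are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicFilterLayer(nn.Module):
    """Sketch of dynamic convolution: a filter-generating network predicts
    one k x k kernel per sample, which is applied to that sample's input."""

    def __init__(self, channels=1, k=5):
        super().__init__()
        self.k = k
        # Filter-generating network: global pooling + MLP -> k*k kernel weights.
        self.generator = nn.Sequential(
            nn.AdaptiveAvgPool2d(8),
            nn.Flatten(),
            nn.Linear(channels * 8 * 8, 128),
            nn.ReLU(),
            nn.Linear(128, channels * k * k),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        filters = self.generator(x).view(b * c, 1, self.k, self.k)
        # Softmax so each dynamic kernel sums to one (a soft local shift/blur).
        filters = F.softmax(filters.view(b * c, -1), dim=-1).view_as(filters)
        # Grouped convolution applies each sample's own kernel to its own input.
        out = F.conv2d(x.view(1, b * c, h, w), filters,
                       padding=self.k // 2, groups=b * c)
        return out.view(b, c, h, w)

if __name__ == "__main__":
    layer = DynamicFilterLayer()
    frames = torch.randn(4, 1, 64, 64)   # e.g. moving-MNIST-like frames
    print(layer(frames).shape)           # torch.Size([4, 1, 64, 64])
```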
Language models are defined over a finite set of inputs, which creates a vocabulary bottleneck when we attempt to scale the number of supported languages. Tackling this bottleneck results in a trade-off between what can be represented in the embedding matrix and computational issues in the output layer. This paper introduces PIXEL, a pixel-based language encoder that suffers from neither of these issues. PIXEL is a pretrained language model that renders text as images, making it possible to transfer representations across languages based on orthographic similarity or the co-activation of pixels. Instead of predicting a distribution over tokens, PIXEL is trained to reconstruct the pixels of masked patches. We pretrain an 86M-parameter PIXEL model on the same English data as BERT and evaluate it on syntactic and semantic tasks in typologically diverse languages, including various non-Latin scripts. We find that PIXEL substantially outperforms BERT on syntactic and semantic processing tasks for scripts that are not found in the pretraining data, but that PIXEL is slightly weaker than BERT when working with Latin scripts. Furthermore, we find that PIXEL is more robust to noisy text inputs than BERT, further confirming the benefits of modelling language with pixels.
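As a rough illustration of the training objective only (not the actual PIXEL encoder, which follows a ViT-MAE-style design), the toy sketch below splits a rendered text image into patches, blanks a subset, and computes a reconstruction loss restricted to the masked patches; all sizes and layer choices are assumptions.

```python
import torch
import torch.nn as nn

def to_patches(img, p=16):
    """Split a (B, H, W) grayscale 'rendered text' image into flat patches."""
    b, h, w = img.shape
    patches = img.unfold(1, p, p).unfold(2, p, p)   # (B, H/p, W/p, p, p)
    return patches.reshape(b, -1, p * p)            # (B, N, p*p)

class TinyPixelLM(nn.Module):
    """Toy masked-patch autoencoder: embed patches, encode, predict raw pixels."""
    def __init__(self, patch_dim=256, d=128):
        super().__init__()
        self.embed = nn.Linear(patch_dim, d)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True),
            num_layers=2)
        self.head = nn.Linear(d, patch_dim)          # reconstruct pixel values

    def forward(self, patches, mask):
        x = self.embed(patches)
        x = torch.where(mask.unsqueeze(-1), torch.zeros_like(x), x)  # blank masked patches
        return self.head(self.encoder(x))

if __name__ == "__main__":
    img = torch.rand(2, 16, 256)                     # stand-in for rendered text (B, H, W)
    patches = to_patches(img)                        # (2, 16, 256)
    mask = torch.zeros(patches.shape[:2], dtype=torch.bool)
    mask[:, ::4] = True                              # mask ~25% of the patches
    model = TinyPixelLM()
    recon = model(patches, mask)
    loss = ((recon - patches) ** 2)[mask].mean()     # loss only on masked patches
    print(loss.item())
```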
In this paper we present AIDA, an active-inference-based agent that iteratively designs a personalized audio processing algorithm through interaction with a human client. The target application of AIDA is tuning the parameters of a hearing aid (HA) algorithm: whenever a HA client is not satisfied with their HA performance, AIDA proposes the most interesting alternative values for those parameters. AIDA interprets the search for the "most interesting alternatives" as a problem of optimal (acoustic) context-aware Bayesian trial design. In computational terms, AIDA is implemented as an active-inference-based agent with an Expected Free Energy criterion for trial design. This type of architecture is inspired by neuro-economic models of efficient (Bayesian) trial design and implies that AIDA comprises generative probabilistic models for the acoustic signal and for user responses. We propose a novel generative model for the acoustic signal as a sum of time-varying auto-regressive filters, together with a user response model based on a Gaussian process classifier. The full AIDA agent has been implemented in a factor graph of the generative model, and all tasks (parameter learning, acoustic context classification, trial design, etc.) are realized through variational message passing on the factor graph. All verification and validation experiments and demonstrations are freely accessible in our GitHub repository.
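The trial-design loop can be illustrated with a deliberately simplified toy model that is not AIDA's actual generative model (no time-varying AR filters or Gaussian process classifier here): each candidate hearing-aid setting carries an independent Beta-Bernoulli belief about client approval, and the next trial is the setting with the largest expected information gain, the epistemic quantity that also drives an expected-free-energy criterion. The settings and belief values below are made up for illustration.

```python
from scipy.stats import beta

def expected_information_gain(a, b):
    """EIG of one more binary user response under a Beta(a, b) belief
    about 'client approves this setting' (the epistemic value of the trial)."""
    p_yes = a / (a + b)
    h_now = beta(a, b).entropy()
    h_after = p_yes * beta(a + 1, b).entropy() + (1 - p_yes) * beta(a, b + 1).entropy()
    return h_now - h_after

# One (alpha, beta) belief per candidate hearing-aid setting (assumed values).
beliefs = {"setting_A": [1.0, 1.0],   # nothing known yet
           "setting_B": [8.0, 2.0],   # many approvals already observed
           "setting_C": [2.0, 7.0]}   # mostly disapprovals so far

def propose_trial(beliefs):
    """Pick the candidate whose next user response is most informative."""
    return max(beliefs, key=lambda k: expected_information_gain(*beliefs[k]))

def update(beliefs, setting, approved):
    """Conjugate Beta-Bernoulli update after the client's response."""
    beliefs[setting][0 if approved else 1] += 1.0

if __name__ == "__main__":
    choice = propose_trial(beliefs)
    print("propose:", choice)                # the still-uncertain setting_A
    update(beliefs, choice, approved=True)
    print("updated belief:", beliefs[choice])
```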
We present Reactive Message Passing (RMP) as a framework for executing schedule-free, robust, and scalable message passing-based inference in a factor graph representation of a probabilistic model. RMP is based on a reactive programming style, which only describes how nodes in the factor graph react to changes in connected nodes. The absence of a fixed message passing schedule improves the robustness, scalability, and execution time of the inference procedure. We also present ReactiveMP.jl, a Julia package that realizes RMP through minimization of a constrained free energy. Given user-defined local form and factorization constraints on the structure of the variational posterior distribution, ReactiveMP.jl executes hybrid message passing algorithms including belief propagation, variational message passing, expectation propagation, and expectation-maximization update rules. Experimental results demonstrate the improved performance of ReactiveMP.jl-based RMP in comparison to other Julia packages for Bayesian inference across a range of probabilistic models. In particular, we show that the RMP framework is able to run Bayesian inference for large-scale probabilistic state-space models with hundreds of thousands of random variables on a standard laptop computer.
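The reactive style itself, independent of ReactiveMP.jl's Julia API, can be sketched in a few lines: nodes subscribe to incoming message streams and recompute their outgoing message whenever any input changes, so no global schedule is ever constructed. The Bernoulli example below is purely illustrative and not taken from the package.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class MessageStream:
    """A tiny observable: subscribers are re-run whenever a new message arrives."""
    value: object = None
    subscribers: List[Callable] = field(default_factory=list)

    def subscribe(self, fn):
        self.subscribers.append(fn)

    def push(self, msg):
        self.value = msg
        for fn in self.subscribers:
            fn()

def normalize(p):
    s = sum(p)
    return [x / s for x in p]

class BernoulliPosteriorNode:
    """Reacts to changes on its incoming likelihood streams; no fixed schedule."""
    def __init__(self, prior, likelihood_streams):
        self.prior = prior
        self.inputs = likelihood_streams
        self.posterior = MessageStream()
        for s in self.inputs:
            s.subscribe(self.recompute)      # re-fire whenever any input changes

    def recompute(self):
        post = list(self.prior)
        for s in self.inputs:
            if s.value is not None:
                post = [p * l for p, l in zip(post, s.value)]
        self.posterior.push(normalize(post))

if __name__ == "__main__":
    obs_a, obs_b = MessageStream(), MessageStream()
    node = BernoulliPosteriorNode(prior=[0.5, 0.5], likelihood_streams=[obs_a, obs_b])
    node.posterior.subscribe(lambda: print("posterior:", node.posterior.value))
    obs_a.push([0.9, 0.1])    # posterior updates immediately
    obs_b.push([0.8, 0.2])    # and again, without any global schedule
```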
The popularity of social media has created problems such as hate speech and sexism. The identification and classification of sexism in social media are very relevant tasks, as they allow the building of a healthier social environment. Nevertheless, these tasks are challenging. This work presents a system that uses multilingual and monolingual BERT models together with data-point translation and ensemble strategies for sexism identification and classification in English and Spanish. It was developed in the context of the sEXism Identification in Social neTworks shared task (EXIST 2021), proposed by the Iberian Languages Evaluation Forum (IberLEF). The proposed system and its main components are described, and an in-depth hyperparameter analysis is carried out. The main results observed were: (i) the system obtained better results than the baseline model (multilingual BERT); (ii) ensemble models obtained better results than monolingual models; and (iii) an ensemble model considering all individual models and the best standardization values obtained the best accuracy and F1-score for both tasks. This work obtained first place in both tasks, with the highest accuracies (0.658 and 0.780 across Task 1 and Task 2) and the highest F1-scores (an F1-binary of 0.780 for Task 1 and the F1-macro for Task 2).
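The ensembling step can be sketched generically as soft voting over the class probabilities produced by the individual fine-tuned models; the per-model weights in the example are made-up placeholders (in practice they could come from validation scores), and this is not necessarily the paper's exact combination rule.

```python
import numpy as np

def soft_vote(prob_matrices, weights=None):
    """Ensemble by averaging each model's predicted class probabilities.

    prob_matrices: list of (n_examples, n_classes) arrays, one per model.
    weights: optional per-model weights (assumed, e.g. validation scores).
    """
    probs = np.stack(prob_matrices)              # (n_models, n, c)
    if weights is None:
        weights = np.ones(len(prob_matrices))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    avg = np.tensordot(weights, probs, axes=1)   # (n, c)
    return avg.argmax(axis=1), avg

if __name__ == "__main__":
    # Toy probabilities from three hypothetical fine-tuned models.
    m1 = np.array([[0.9, 0.1], [0.4, 0.6]])
    m2 = np.array([[0.7, 0.3], [0.2, 0.8]])
    m3 = np.array([[0.6, 0.4], [0.55, 0.45]])
    labels, avg = soft_vote([m1, m2, m3], weights=[0.78, 0.75, 0.70])
    print(labels, avg.round(3))
```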
This paper describes our participation in the DEtection of TOXicity in comments In Spanish (DETOXIS) shared task 2021, held at the third Workshop of the Iberian Languages Evaluation Forum. The shared task is divided into two related classification tasks: (i) Task 1: toxicity detection; and (ii) Task 2: toxicity level detection. Both focus on the xenophobia problem exacerbated by the spread of toxic comments posted on different online news articles related to immigration. One of the efforts needed to mitigate this problem is detecting toxicity in such comments. Our main objective was to achieve the best possible results on the competition's official metrics: the F1-score for Task 1 and the Closeness Evaluation Metric (CEM) for Task 2. To solve the tasks, we worked with two types of machine learning models: (i) statistical models and (ii) Deep Bidirectional Transformers for Language Understanding (BERT) models. We obtained our best results on both tasks with BETO, a BERT model trained on a large Spanish corpus. We obtained third place in the official Task 1 ranking with an F1-score of 0.5996, and sixth place in the official Task 2 ranking with a CEM of 0.7142. Our results suggest that: (i) BERT models obtain better results than statistical models for toxicity detection in text comments; and (ii) monolingual BERT models have an advantage over multilingual BERT models for toxicity detection in text comments in their pretrained language.
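A minimal sketch of the BETO-based classifier is shown below, assuming the BETO checkpoint commonly published on the Hugging Face Hub under dccuchile/bert-base-spanish-wwm-uncased (the hub id is an assumption of this sketch); the classification head here is randomly initialized, so fine-tuning on the DETOXIS data (e.g. with transformers.Trainer) would still be required before the predictions mean anything.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hub id commonly used for BETO; treat it as an assumption of this sketch.
MODEL_ID = "dccuchile/bert-base-spanish-wwm-uncased"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID, num_labels=2)

comments = ["Ejemplo de comentario neutro.", "Otro comentario de prueba."]
batch = tokenizer(comments, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**batch).logits            # (2, 2): [not toxic, toxic]
probs = torch.softmax(logits, dim=-1)
print(probs)  # head is untrained here: fine-tuning is still required
```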
Labelling data can be an expensive task, as it is usually performed manually by domain experts. This is cumbersome for deep learning, which depends on large labelled datasets. Active learning (AL) is a paradigm that aims to reduce the labelling effort by using only the data the model in use considers most informative. Little research has been done on AL in a text classification setting, and almost none involving the more recent, state-of-the-art natural language processing (NLP) models. Here, we present an empirical study that compares uncertainty-based AL algorithms with BERT$_{base}$ as the classifier in use. We evaluate the algorithms on two NLP classification datasets: the Stanford Sentiment Treebank and KvK-Frontpages. In addition, we explore heuristics that aim to solve presupposed problems of uncertainty-based AL, namely that it is unscalable and prone to selecting outliers. Furthermore, we explore the influence of the query-pool size on the performance of AL. While the proposed heuristics were found not to improve the performance of AL, our results show that uncertainty-based AL with the BERT$_{base}$ output probabilities does pay off, and that this difference in performance can decrease as the query-pool size grows.
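One common uncertainty criterion, least confidence, can be sketched as follows; the paper compares several uncertainty-based strategies, so this is only an illustration of how a query pool of the most uncertain unlabeled examples would be selected in each AL round, with random softmax outputs standing in for a classifier such as BERT$_{base}$.

```python
import numpy as np

def least_confidence(probs):
    """Uncertainty score: 1 - max class probability per unlabeled example."""
    return 1.0 - probs.max(axis=1)

def select_query_pool(probs, pool_size):
    """Pick the `pool_size` most uncertain examples to send for labelling."""
    scores = least_confidence(probs)
    return np.argsort(-scores)[:pool_size]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in for a classifier's softmax outputs on the unlabeled set.
    logits = rng.normal(size=(1000, 2))
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    query = select_query_pool(probs, pool_size=50)
    print(query[:10])   # indices to hand to the annotators this round
```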
Advances in computer vision and machine learning techniques have led to significant development in 2D and 3D human pose estimation from RGB cameras, LiDAR, and radars. However, human pose estimation from images is adversely affected by occlusion and lighting, which are common in many scenarios of interest. Radar and LiDAR technologies, on the other hand, need specialized hardware that is expensive and power-intensive. Furthermore, placing these sensors in non-public areas raises significant privacy concerns. To address these limitations, recent research has explored the use of WiFi antennas (1D sensors) for body segmentation and key-point body detection. This paper further expands on the use of the WiFi signal in combination with deep learning architectures, commonly used in computer vision, to estimate dense human pose correspondence. We developed a deep neural network that maps the phase and amplitude of WiFi signals to UV coordinates within 24 human regions. The results of the study reveal that our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches, by utilizing WiFi signals as the only input. This paves the way for low-cost, broadly accessible, and privacy-preserving algorithms for human sensing.
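As a heavily simplified sketch of the input/output mapping only (the actual system uses a dedicated modality translation network feeding a DensePose-style head), the toy network below consumes stacked amplitude and phase tensors and emits per-region UV maps; all tensor shapes, antenna counts, and layer choices are assumptions.

```python
import torch
import torch.nn as nn

class WiFiToUV(nn.Module):
    """Toy sketch: map WiFi CSI amplitude/phase tensors to per-region UV maps.
    Shapes (antennas, subcarriers, regions, output resolution) are assumptions."""

    def __init__(self, in_ch=6, regions=24, out_hw=56):
        super().__init__()
        self.out_hw = out_hw
        self.regions = regions
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.Upsample(size=(out_hw, out_hw), mode="bilinear", align_corners=False),
            nn.Conv2d(128, regions * 2, 1),   # a U and a V channel per body region
            nn.Sigmoid(),                     # UV coordinates in [0, 1]
        )

    def forward(self, amplitude, phase):
        x = torch.cat([amplitude, phase], dim=1)   # stack the two modalities
        uv = self.net(x)
        return uv.view(uv.shape[0], self.regions, 2, self.out_hw, self.out_hw)

if __name__ == "__main__":
    amp = torch.randn(1, 3, 30, 30)     # e.g. 3 antenna pairs x subcarrier grid
    pha = torch.randn(1, 3, 30, 30)
    print(WiFiToUV()(amp, pha).shape)   # torch.Size([1, 24, 2, 56, 56])
```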
Due to the environmental impacts caused by the construction industry, repurposing existing buildings and making them more energy-efficient has become a high-priority issue. However, a legitimate concern of land developers is associated with the buildings' state of conservation. For that reason, infrared thermography has been used as a powerful tool to characterize these buildings' state of conservation by detecting pathologies such as cracks and humidity. Thermal cameras detect the radiation emitted by any material and translate it into temperature-color-coded images. Abnormal temperature changes may indicate the presence of pathologies; however, reading thermal images is not always straightforward. This research project aims to combine infrared thermography and machine learning (ML) to help stakeholders determine the viability of reusing existing buildings by identifying their pathologies and defects more efficiently and accurately. In this phase of the project, we used an image classification model based on deep convolutional neural networks (DCNN) to differentiate three levels of cracks in one particular building. The model's accuracy was compared across MSX and thermal images acquired from two distinct thermal cameras, as well as fused images (formed through multisource information), to test the influence of the input data and network on the detection results.
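A minimal sketch of the kind of classifier involved is given below, assuming a ResNet-18 backbone (the backbone choice is ours, not necessarily the project's): the same architecture would be trained separately on the MSX, thermal, and fused image sets so that test accuracies can be compared across input types.

```python
import torch
import torch.nn as nn
from torchvision import models

def make_crack_classifier(num_levels=3):
    """Small DCNN classifier for the three assumed crack-severity levels."""
    net = models.resnet18(weights=None)          # backbone choice is an assumption
    net.fc = nn.Linear(net.fc.in_features, num_levels)
    return net

if __name__ == "__main__":
    # The same architecture would be trained three times, once per input set
    # (MSX images, plain thermal images, fused images), and accuracy compared.
    model = make_crack_classifier()
    batch = torch.randn(4, 3, 224, 224)          # placeholder image batch
    print(model(batch).shape)                    # torch.Size([4, 3])
```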
The advances in Artificial Intelligence are creating new opportunities to improve the lives of people around the world, from business to healthcare, from lifestyle to education. For example, some systems profile users using their demographic and behavioral characteristics to make certain domain-specific predictions. Often, such predictions impact the life of the user directly or indirectly (e.g., loan disbursement, determining insurance coverage, shortlisting applications, etc.). As a result, the concerns over such AI-enabled systems are also increasing. To address these concerns, such systems are mandated to be responsible, i.e., transparent, fair, and explainable to developers and end-users. In this paper, we present ComplAI, a unique framework to enable, observe, analyze, and quantify explainability, robustness, performance, fairness, and model behavior in drift scenarios, and to provide a single Trust Factor that evaluates different supervised Machine Learning models not just on their ability to make correct predictions but from an overall responsibility perspective. The framework helps users to (a) connect their models and enable explanations, (b) assess and visualize different aspects of the model, such as robustness, drift susceptibility, and fairness, and (c) compare different models (from different model families or obtained through different hyperparameter settings) from an overall perspective, thereby facilitating actionable recourse for improvement of the models. It is model agnostic, works with different supervised machine learning scenarios (i.e., Binary Classification, Multi-class Classification, and Regression) and frameworks, and can be seamlessly integrated with any ML life-cycle framework. Thus, this already deployed framework aims to unify critical aspects of Responsible AI systems for regulating the development process of such real systems.
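Since the abstract does not specify how the Trust Factor is computed, the sketch below shows only a hypothetical weighted aggregation of normalized per-dimension scores into a single number; the field names and weighting are assumptions, not ComplAI's actual definition.

```python
from dataclasses import dataclass

@dataclass
class ModelAssessment:
    """Normalized scores in [0, 1] for each dimension the framework inspects.
    Field names and the aggregation below are illustrative assumptions,
    not ComplAI's actual Trust Factor definition."""
    performance: float
    robustness: float
    fairness: float
    explainability: float
    drift_resilience: float

def trust_factor(a: ModelAssessment, weights=None) -> float:
    """Hypothetical single-score aggregation over the assessed dimensions."""
    scores = [a.performance, a.robustness, a.fairness,
              a.explainability, a.drift_resilience]
    weights = weights or [1.0] * len(scores)
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

if __name__ == "__main__":
    model_a = ModelAssessment(0.91, 0.72, 0.85, 0.60, 0.78)
    model_b = ModelAssessment(0.94, 0.55, 0.66, 0.70, 0.64)
    # Comparing models on overall responsibility, not accuracy alone.
    print(round(trust_factor(model_a), 3), round(trust_factor(model_b), 3))
```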